Search Results: "vasudev"

13 December 2015

Vasudev Kamath: Mount ordering and automount options in systemd

This post describes a brief encounter I had with systemd which rendered my system unbootable. At first I felt it was systemd's problem, but I later figured out it was my own mistake. Below is the story, divided into two sections, Problem and Solution. Problem describes the issue I faced with systemd, and Solution discusses the .mount and .automount suffix files used by systemd.
Problem I have several local bin folders and I don't want to keep adding every one of them to the PATH environment variable. I first started using aufs as an alternative to symlinks or modifying PATH. Recently I learnt about a much more lightweight union filesystem that lives in the kernel, overlayfs, and Google led me to an Arch wiki article which showed an fstab entry like the one below
overlay        /usr/local/bin        overlay noauto,x-systemd.automount,lowerdir=/home/vasudev/Documents/utilities/bin,upperdir=/home/vasudev/Documents/utilities/jonas-bin,workdir=/home/vasudev/Documents/utilities/bin-overlay    0       0
After adding this entry, on the next reboot systemd was not able to mount my LVM home and swap partitions. It did, however, mount the root partition. It dropped me to the emergency target, but the login prompt never returned, so without any alternative I had to reinstall the system. Funnily enough I started encountering the same issue again (yes, after I had added the above entry to fstab); at that time I never suspected it was the culprit. My friend Ritesh finally got my system booting after removing the weird x-systemd.automount option. I never investigated further at the time why the problem occurred.
Solution After re-encountering a similar problem in some other project, I read the manuals on systemd.mount, systemd-fstab-generator and systemd.automount, and I now have some understanding of what really went wrong in my case above. So let us see what really happens.

All of the above happened because systemd translates /etc/fstab into .mount units at run time using systemd-fstab-generator. Every entry in fstab translates into a file named after the mount point, with / in the path of the mount point replaced by -; so the / mount point is named -.mount, /home becomes home.mount and /boot becomes boot.mount. All these files can be seen in the directory /run/systemd/generator. All these mount points are needed by local-fs.target: if any of them fails, local-fs.target fails, and if local-fs.target fails it invokes emergency.target.

The systemd-fstab-generator manual says that the ordering information in /etc/fstab is discarded. That means if you have union mounts, bind mounts or fuse mounts in fstab (which normally sit at the end of fstab) and they use paths under /home or some other mount point, they may fail to mount, because systemd-fstab-generator does not consider the ordering in fstab. This is what happened in my case: my overlay mount point, usr-local-bin.mount, happened to be tried before home.mount, because no explicit dependency like Requires= or After= was declared. When systemd tried to mount it, the paths required under /home were not present yet, so it failed, which in turn invoked emergency.target since usr-local-bin.mount is a Requires= dependency of local-fs.target. What I still don't understand is why emergency.target never gave me the root shell after entering the login information; I feel that part is unrelated to the problem described above and is some other bug.

To overcome this, we can give systemd-fstab-generator some information on the dependencies of each mount point. The systemd.mount manual page suggests several options for this. The one I used in my case is x-systemd.requires, which is placed in the options column of fstab and specifies the mount point which needs to be mounted before this one. So my overlay fs entry translates to something like below
overlay        /usr/local/bin        overlay noauto,x-systemd.requires=/home,x-systemd.automount,lowerdir=/home/vasudev/Documents/utilities/bin,upperdir=/home/vasudev/Documents/utilities/jonas-bin,workdir=/home/vasudev/Documents/utilities/bin-overlay   0       0
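To see what the generator actually produced from this entry, the generated units can be inspected directly; this is just an illustrative check (file names follow the escaping rules described above, and the exact contents depend on the systemd version).
ls /run/systemd/generator/
cat /run/systemd/generator/usr-local-bin.mount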
There is another special option called x-systemd.automount, which makes systemd-fstab-generator create a .automount unit for the mount point. What does a systemd automount unit do? It provides on-demand and parallelized file system mounting: much like socket activation with systemd.socket, the file system is mounted the first time the mount point is accessed. Now if you look at the dependencies of usr-local-bin.mount you will see the following.
systemctl show -p After usr-local-bin.mount
After=-.mount systemd-journald.socket local-fs-pre.target system.slice usr-local-bin.automount
This means usr-local-bin.mount is now ordered after usr-local-bin.automount. And let us see what usr-local-bin.automount needs.
systemctl show -p Requires usr-local-bin.automount
Requires=-.mount home.mount
systemctl show -p After usr-local-bin.automount
After=-.mount home.mount
So clearly usr-local-bin.automount is activated only after -.mount and home.mount are active. The same can be done for any bind mount or fuse mount which requires other mount points to be mounted first. Also note that x-systemd.automount is not a mandatory option for declaring dependencies; I used it just to make sure /usr/local/bin is mounted only when it is really needed.
Conclusion systemd has changed a lot of the traditional ways of doing things. I did not understand at first why my system failed to boot; that happened because I was not really aware of how systemd works and was trying to debug the problem with the traditional approach. So there really is a learning curve for every sysadmin out there who is going to use systemd. Most of them will read the documentation beforehand, but others like me will learn only after hitting a situation like the one above. :-) Is systemd interfering with /etc/fstab a good thing? I don't know, but since systemd parallelizes the boot procedure something like this is really needed. Is there a way to make systemd not touch /etc/fstab? Yes: pass the fstab=0 option on the kernel command line and systemd-fstab-generator will not create any .mount or .swap files from your /etc/fstab. !NB It also looks like the x-systemd.requires option was introduced recently and is not available in systemd <= 215, which is the default in Jessie. So how do you declare dependencies on a Jessie system? I don't have an answer. I did read that x-systemd.automount, which is available in those versions of systemd, can be used, but I'm yet to experiment with this. If it succeeds I will write a post about it.

9 November 2015

Vasudev Kamath: Forwarding host port to service in container

Problem I have an lxc container running the distcc daemon and I would like to forward the distcc traffic coming to my host system into the container.
Solution The following simple script, which uses iptables, did the job.
#!/bin/sh
set -e
usage() {
    cat <<EOF
    $(basename $0) [options] <in-interface> <out-interface> <port> <destination>
    --clear           Clear the previous rules before inserting new
                      ones
    --protocol        Protocol for the rules to use.
    in-interface      Interface on which incoming traffic is expected
    out-interface     Interface to which incoming traffic is to be
                      forwarded.
    port              Port to be forwarded. Can be integer or string
                      from /etc/services.
    destination       IP and port of the destination system to which
                      traffic needs to be forwarded. This should be in
                      form <destination_ip:port>
(C) 2015 Vasudev Kamath - This program comes with ABSOLUTELY NO
WARRANTY. This is free software, and you are welcome to redistribute
it under the GNU GPL Version 3 (or later) License
EOF
}

setup_portforwarding () {
    local protocol="$1"
    iptables -t nat -A PREROUTING -i $IN_INTERFACE -p "$protocol" --dport $PORT \
           -j DNAT --to $DESTINATION
    iptables -A FORWARD -p "$protocol" -d ${DESTINATION%%:*} --dport $PORT -j ACCEPT
    # Returning packet should have gateway IP
    iptables -t nat -A POSTROUTING -s ${DESTINATION%%:*} -o \
           $IN_INTERFACE -j SNAT --to ${IN_IP%%/*}
}
if [ $(id -u) -ne 0 ]; then
    echo "You need to be root to run this script"
    exit 1
fi
while true; do
    case "$1" in
      --clear)
          CLEAR_RULES=1
          shift
          ;;
      --protocol|--protocol=*)
      if [ "$1" = "--protocol" -a -n "$2" ];then
          PROTOCOL="$2"
          shift 2
      elif [ "${1#--protocol=}" != "$1" ]; then
          PROTOCOL="${1#--protocol=}"
          shift 1
      else
          echo "You need to specify protocol (tcp udp)"
          exit 2
      fi
      ;;
      *)
          break
          ;;
    esac
done
if [ $# -ne 4 ]; then
    usage $0
    exit 2
fi
IN_INTERFACE="$1"
OUT_INTERFACE="$2"
PORT="$3"
DESTINATION="$4"
# Get the incoming interface IP. This is used for SNAT.
IN_IP=$(ip addr show $IN_INTERFACE | \
             perl -nE '/inet\s(.*)\sbrd/ and print $1')
if [ -n "$CLEAR_RULES" ]; then
    iptables -t nat -X
    iptables -t nat -F
    iptables -F
fi
if [ -n "$PROTOCOL" ]; then
    setup_portforwarding $PROTOCOL
else
    setup_portforwarding tcp
    setup_portforwarding udp
fi
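As an example, a hypothetical invocation to forward the distcc port (3632) from the host's incoming interface to a container reachable over the bridge would look roughly like this (the script name, interface names and container address are all illustrative):
sudo ./forward-port.sh --clear --protocol tcp eth0 lxcbr0 3632 172.16.10.2:3632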
Coming back to systemd-nspawn, I see there is a --port option which takes an argument of the form proto:hostport:destport, where proto can be either tcp or udp and hostport and destport are numbers from 1-65535. This option assumes private networking is enabled for the container. I've not tried this option yet, but it simplifies things quite a lot; it is like the -p switch used by docker.
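Untested on my side, but going by the option format described above, the equivalent of the iptables script for distcc would presumably be something like this (the port and container path are illustrative):
sudo systemd-nspawn -bD /path/to/container --network-veth \
     --network-bridge=natbr0 --port=tcp:3632:3632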

Vasudev Kamath: Taming systemd-nspawn for running containers

I've been trying to run containers using systemd-nspawn for quite some time, but I was always bumping into one dead end or another. This is not systemd-nspawn's fault; rather it was my impatience stopping me from reading the manual pages properly, plus the lack of good tutorial-like articles available online. Compared to this, LXC has quite a lot of good tutorials and howtos available. This article is my effort to create notes putting all the required information in one place.
Creating a Debian Base Install The first step is to have a minimal Debian system somewhere on your hard disk. This can easily be done using debootstrap. I wrote a custom script to avoid reading the manual every time I want to run debootstrap. Parts of this script (mostly the package list and the root password generation) are stolen from the lxc-debian template provided by the lxc package.
#!/bin/sh
set -e
set -x
usage () {
    echo "${0##*/} [options] <suite> <target> [<mirror>]"
    echo "Bootstrap rootfs for Debian"
    echo
    cat <<EOF
    --arch         set the architecture to install
    --root-passwd  set the root password for bootstrapped rootfs
EOF
}
# copied from the lxc-debian template
packages=ifupdown,\
locales,\
libui-dialog-perl,\
dialog,\
isc-dhcp-client,\
netbase,\
net-tools,\
iproute,\
openssh-server,\
dbus
if [ $(id -u) -ne 0 ]; then
    echo "You must be root to execute this command"
    exit 2
fi
if [ $# -lt 2 ]; then
   usage $0
fi
while true; do
    case "$1" in
        --root-passwd|--root-passwd=?*)
            if [ "$1" = "--root-passwd" -a -n "$2" ]; then
                ROOT_PASSWD="$2"
                shift 2
            elif [ "$1" != "$ 1#--root-passwd= " ]; then
                ROOT_PASSWD="$ 1#--root-passwd= "
                shift 1
            else
                # copied from lxc-debian template
                ROOT_PASSWD="$(dd if=/dev/urandom bs=6 count=1 2>/dev/null base64)"
                ECHO_PASSWD="yes"
            fi
            ;;
        --arch|--arch=?*)
            if [ "$1" = "--arch" -a -n "$2" ]; then
                ARCHITECTURE="$2"
                shift 2
            elif [ "$1" != "$ 1#--arch= " ]; then
                ARCHITECTURE="$ 1#--arch= "
                shift 1
            else
                ARCHITECTURE="$(dpkg-architecture -q DEB_HOST_ARCH)"
            fi
            ;;
        *)
            break
            ;;
    esac
done
release="$1"
target="$2"
if [ -z "$1" ]   [ -z "$2" ]; then
    echo "You must specify suite and target"
    exit 1
fi
if [ -n "$3" ]; then
    MIRROR="$3"
fi
MIRROR=${MIRROR:-http://httpredir.debian.org/debian}
echo "Downloading Debian $release ..."
debootstrap --verbose --variant=minbase --arch=$ARCHITECTURE \
             --include=$packages \
             "$release" "$target" "$MIRROR"
if [ -n "$ROOT_PASSWD" ]; then
    echo "root:$ROOT_PASSWD"   chroot "$target" chpasswd
    echo "Root password is '$ROOT_PASSWRD', please change!"
fi
It just gets my needs done; if you don't like it, feel free to modify it or use debootstrap directly. !NB Please install the dbus package in the minimal base install, otherwise you will not be able to control the container using machinectl
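For example, a hypothetical run of the script above to bootstrap a sid rootfs (the script name, password and target path are illustrative) would be:
sudo ./bootstrap-debian.sh --arch=amd64 --root-passwd=secret sid /home/vasudev/containers/sid-dev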
Manually Running Container and then Persisting It Next we need to run the container manually. This is done using the following command.
systemd-nspawn -bD   /path/to/container --network-veth \
     --network-bridge=natbr0 --machine=Machinename
The --machine option is not mandatory; if it is not specified, systemd-nspawn takes the directory name as the machine name, and if you have characters like - in the directory name they get translated to the hex code x2d, which makes controlling the container by name difficult. --network-veth tells systemd-nspawn to enable virtual ethernet based networking, and --network-bridge tells it which bridge interface on the host system to use. Together these options give the container private networking; if they are not specified, the container can use the host system's interfaces, thereby removing the network isolation of the container. Once you run this command the container comes up and you can use machinectl to control it. The container can be persisted using the following command
machinectl enable container-name
This creates a symbolic link from /lib/systemd/system/systemd-nspawn@.service into /etc/systemd/system/machine.target.wants/. This allows you to start or stop the container using machinectl or systemctl. The only catch here is that your base install should be under /var/lib/machines/. What I do in my case is create a symbolic link from my base container to /var/lib/machines/container-name. !NB Note that the symbolic link name under /var/lib/machines should be the same as the container name you gave with the --machine switch, or the directory name if you didn't specify --machine.
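Concretely, the symlink plus enable step would look roughly like this (the container path and name are illustrative):
ln -s /home/vasudev/containers/sid-dev /var/lib/machines/sid-dev
machinectl enable sid-dev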
Persisting Container Networking We persisted the container in the step above, but this does not persist the networking options we passed on the command line. systemd-nspawn@.service uses the following command to invoke the container.
ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot --link-journal=try-guest --network-veth --settings=override --machine=%I
To persist the bridge networking configuration we set up on the command line, we need the help of systemd-networkd. So first we need to enable systemd-networkd.service on both the container and the host system.
systemctl enable systemd-networkd.service
Inside the container, interfaces will be named hostN, where N increments with the number of interfaces; in our example we have a single interface, so it will be named host0. By default network interfaces are down inside the container, hence systemd-networkd is needed to bring them up. We put the following in the /etc/systemd/network/host0.network file inside the container.
[Match]
Name=host0
[Network]
Description=Container wired interface host0
DHCP=yes
On the host system we just configure the bridge interface using systemd-networkd. I put the following in natbr0.netdev under /etc/systemd/network/
[NetDev]
Description=Bridge natbr0
Name=natbr0
Kind=bridge
In my case I had already configured the bridge in the /etc/network/interfaces file for lxc, so using systemd-networkd is not really needed here. Since systemd-networkd doesn't do anything if the network/virtual device is already present, I could safely put the above configuration in place and enable systemd-networkd. Just for the notes, here is my natbr0 configuration in the interfaces file.
auto natbr0
iface natbr0 inet static
   address 172.16.10.1
   netmask 255.255.255.0
   pre-up brctl addbr natbr0
   post-down brctl delbr natbr0
   post-down sysctl net.ipv4.ip_forward=0
   post-down sysctl net.ipv6.conf.all.forwarding=0
   post-up sysctl net.ipv4.ip_forward=1
   post-up sysctl net.ipv6.conf.all.forwarding=1
   post-up iptables -A POSTROUTING -t mangle -p udp --dport bootpc -s 172.16.0.0/16 -j CHECKSUM --checksum-fill
   pre-down iptables -D POSTROUTING -t mangle -p udp --dport bootpc -s 172.16.0.0/16 -j CHECKSUM --checksum-fill
Once this is done, just reload systemd-networkd and make sure you have dnsmasq or some other DHCP server running on your system. Now the last part is to tell systemd-nspawn to use the bridge interface we have defined. This is done using a container-name.nspawn file, placed under the /etc/systemd/nspawn directory.
[Exec]
Boot=on
[Files]
Bind=/home/vasudev/Documents/Python/upstream/apt-offline/
[Network]
VirtualEthernet=yes
Bridge=natbr0
Here you can specify the networking and file mounting sections of the container. For the full list please refer to the systemd.nspawn manual page. Now that all this is done you can happily do
machinectl start container-name
#or
systemctl start systemd-nspawn@container-name
Resource Control All things said and done, one last part remains: what is the point if we can't control how much resource the container uses? This matters at least to me, because I use an old and somewhat low-powered laptop. Systemd provides a way to control resources through its control interfaces; to see all the interfaces exposed by systemd please refer to the systemd.resource-control manual page. Resources are controlled using systemctl. Once the container is running we can run the following command.
systemctl set-property container-name CPUShares=200 CPUQuota=30% MemoryLimit=500M
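One way to check what values systemd currently has for these properties (assuming the same unit name that was used with set-property above) is:
systemctl show -p CPUShares -p MemoryLimit container-name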
The manual page does say that these settings can be put under the [Slice] section of unit files. I don't have a clear idea whether they can be put in .nspawn files or not. For the sake of persisting them I manually wrote a service file for the container by copying systemd-nspawn@.service and adding a [Slice] section, but I don't know how to find out whether this had any effect. If someone knows more about this please share your suggestions with me and I will update this section with the information you provide.
Conclusion All in all I like systemd-nspawn a lot. I use it to run a container for the development of apt-offline. I previously used lxc, where everything can be controlled using a single config file, but I feel systemd-nspawn is more tightly integrated with the system than lxc. There is definitely more to systemd-nspawn than I've currently figured out. The only thing is that it's not as popular as the other alternatives and definitely lacks good howto documentation. For now the only way out is to dig into the manual pages, scratch your head, pull your hair out and figure out new possibilities in systemd-nspawn. ;-)

21 February 2015

Vasudev Kamath: Running Plan9 using 9vx - using vx32 sandboxing library

Nowadays I'm more and more attracted towards Plan9, an operating system meant to be the successor of UNIX and created by the same people who created the original UNIX. I'm always baffled by the simplicity of Plan9. Sadly Plan9 never took off, for whatever reasons. I've been trying to run Plan9 for a while; I ran Plan9 on a Raspberry Pi model B using 9pi, but I couldn't experiment with it more due to some restrictions in my home setup. I installed the original Plan9 4th Edition from Bell Labs (now part of Alcatel-Lucent), which I will write about in a different post. But running a virtual machine on my system is again a PITA, as the system is already old (3 and a half years). Then I came across 9vx, a port of Plan9 to FreeBSD, Linux and Mac OS X by Russ Cox. I downloaded the original 9vx version 0.9.12 from Russ's page linked above. The archive contains a Plan9 rootfs along with precompiled 9vx binaries for Linux, FreeBSD and Mac OS X. I ran the Linux binary but it crashed.
./9vx.Linux -u glenda
I was seeing an illegal instruction error in dmesg and didn't bother to investigate further. A bit of googling showed me Arch Linux's wiki page on 9vx. I got errors trying to compile the original vx32 from rsc's repository, but later saw that the AUR 9vx package is built from a different repository, forked from rsc's, found here. I cloned the repository locally and compiled it; I don't really remember if I had to install any additional packages, but if you get an error you will know what additional thing is required. After compilation the 9vx binary is found at src/9vx/9vx. I used this newly compiled 9vx to run the rootfs I downloaded from Russ's website.
9vx -u glenda -r /path/to/extracted/9vx-0.9.12/
This launches Plan9 and allows you to work inside it. The good part is that it's not resource hungry, and it still looks like you have a VM running Plan9. But there seems to be a better way to do this directly from the Plan9 ISO from Bell Labs; it can be found on the 9fans list. Now I'm going to try that out too :-). In the next post I will share my experience of using Plan9 on Qemu.

26 December 2014

Vasudev Kamath: Notes: LXC How-To

LXC - Linux Containers allows us to run multiple isolated Linux systems under the same control host. This is useful for testing an application without changing our existing system. To create an LXC container we use the lxc-create command; it accepts a template option, with which we can choose the OS we would like to run in the isolated virtual environment. On a Debian system I see the following templates supported
[vasudev@rudra: ~/ ]% ls /usr/share/lxc/templates
lxc-alpine*    lxc-archlinux*  lxc-centos*  lxc-debian*    lxc-fedora*  lxc-openmandriva*  lxc-oracle*  lxc-sshd*    lxc-ubuntu-cloud*
lxc-altlinux*  lxc-busybox*    lxc-cirros*  lxc-download*  lxc-gentoo*  lxc-opensuse*      lxc-plamo*   lxc-ubuntu*
For my application testing I wanted to create a Debian container. By default the template provided by the lxc package creates a Debian stable container; this can be changed by passing options to debootstrap after -- as shown below.
sudo MIRROR=http://localhost:9999/debian lxc-create -t debian \
     -f   container.conf -n container-name -- -r sid
The -r switch is used to specify the release, and the MIRROR environment variable is used to choose the required Debian mirror. I wanted to use my own local approx installation so I could save some bandwidth. container.conf is the configuration file used for creating the LXC; in my case it contains basic information on how container networking should be set up. The configuration is basically taken from the LXC Debian wiki
lxc.utsname = aptoffline-dev
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0.1
lxc.network.name = eth0
lxc.network.ipv4 = 192.168.3.2/24
lxc.network.veth.pair = vethvm1
I'm using the VLAN setup described in the Debian wiki: LXC VLAN Networking page. Below is my interfaces file.
iface eth0.1 inet manual
iface br0.1  inet manual
   bridge_ports eth0.1
   bridge_fd 0
   bridge_maxwait 0
Before launching the LXC make sure you run the commands below
sudo ifup eth0.1
sudo ifup br0.1
# Also give ip to bridge in same subnet as lxc.network.ipv4
sudo ip addr add 192.168.3.1/24 dev br0.1
I'm giving an IP address to the bridge so that I can communicate with the container from my control host once it comes up. Now start the container using the command below
sudo lxc-start -n container -d -c tty8
We are starting the container in daemon mode and attaching it to console tty8. If you want, you can drop the -d and -c options to start it in the foreground, but it's better to start it in the background and attach to it using the lxc-console command shown below.
sudo lxc-console -n container -t tty8
You can detach from the console using the Ctrl+a q combination and let the container execute in the background. It's also possible to simply ssh into the running container since we have enabled networking. Stopping the container should be done using the lxc-stop command, but without the -k switch (kill) this command never returned for me; even with a timeout the container is not stopped.
sudo lxc-stop -n container
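If the clean stop hangs as described, the force-kill variant (the -k switch mentioned above) is the fallback; as a sketch:
sudo lxc-stop -n container -k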
-r can be used to reboot the container. Since I couldn't get a clean shutdown, I normally attach to the console and issue a halt command in the container itself. Not sure if this is the right way, but it gets the thing done. I consider Linux containers a better alternative for spawning a virtual Linux environment than running a full blown VM like Virtualbox or VMware.

5 October 2014

Vasudev Kamath: Note to Self: LVM Shrink Resize HowTo

Recently I had to reinstall a system at the office with Debian Wheezy and I thought I should use this opportunity to experiment with LVM. Yes, I've not used LVM to date, even though I've been using Linux for more than 5 years now. I know many DD friends who use LVM with LUKS encryption and I always wanted to experiment with it, but since my laptop is the only machine I have, and it is currently in perfect shape, I didn't dare experiment there. This reinstall was a golden opportunity for me to experiment and learn something new. I used the Wheezy CD ISO downloaded using jigdo for the installation. Now I will go a bit off topic and share the USB stick preparation, because I had not done an installation for quite a while; the last one was back in Squeeze times, so as usual I blindly executed the following command.
cat debian-wheezy.iso > /dev/sdb
Surprisingly the USB stick didn't boot! I was getting Corrupt or missing ISO.bin. So next I tried using dd to prepare it.
dd if=debian-wheezy.iso of=/dev/sdb
Surprisingly this also didn't work and I got the same error message as above. This is when I went back to the Debian manual, looked at the installation steps, and found a new way!
cp debian-wheezy.iso /dev/sdb
Look at the destination: it's a device, and voilà, this worked! This is something new I learnt, and I'm surprised how easy it is now to prepare a USB stick. But I still didn't get why the first 2 methods failed! If you know, please do share. Now coming back to LVM. I chose LVM when disk partitioning was asked for, used the guided partitioning method provided by debian-installer, and ended up with the following layout
$ lvs
  LV     VG        Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
home   system-disk -wi-ao-- 62.34g
root   system-disk -wi-ao--  9.31g
swap_1 system-disk -wi-ao--  2.64g
So guided partitioning of debian-installer allocates 10G for root and the rest to home and swap. This is not a problem as such, but when I started installing the required software I could see root running out of space quickly, so I wanted to resize root and give it 10G more. For this I needed to reduce home by 10G, which first requires unmounting the home partition. Unmounting home from the running system isn't possible, so I booted into recovery mode assuming I could unmount home there, but I couldn't. lsof didn't show anyone using /home; after searching a bit I found the fuser command, and it looks like the kernel is using /home, which is mounted by it.
$ fuser -vm /home
                     USER        PID ACCESS COMMAND
/home:               root     kernel mount /home
So it isn't possible to unmount /home in recovery mode either. Online materials told me to use a live CD for doing this, but I didn't have the patience for that, so I just went ahead, commented out the /home mount in /etc/fstab and rebooted! This time it worked and /home was not mounted in recovery mode. Now comes the hard part, resizing home; thanks to the TLDP doc on reducing, I could do it with the following steps
# e2fsck -f /dev/volume-name/home
# resize2fs /dev/volume-name/home 52G
# lvreduce -L-10G /dev/volume-name/home
And now the next part, live-extending the root partition; again thanks to the TLDP doc on extending, the following commands did it.
# lvextend -L+10G /dev/volume-name/root
# resize2fs /dev/volume-name/root
And now the important part! Uncomment the /home line in /etc/fstab so it will be mounted normally on the next boot, and reboot! On login I can see my partitions updated.
# lvs
  LV     VG        Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
home   system-disk -wi-ao-- 52.34g
root   system-disk -wi-ao-- 19.31g
swap_1 system-disk -wi-ao--  2.64g
I've started liking LVM more now! :)

19 July 2014

Vasudev Kamath: Stop messing with my settings Network Manager

I use a laptop with an Atheros wifi card using the ath9k driver. I use hostapd to convert my laptop wifi into an AP (access point) so I can share the network with my Nexus 7 and Kindle. This had been working fine for quite some time, until a recent update. After the recent system update (I use Debian Sid), I couldn't for some reason convert my wifi into an AP so my devices could connect. I couldn't find anything useful for troubleshooting in the logs or in the hostapd debug messages. Every time I start the laptop my wifi card is blocked by RF-KILL and I have to manually unblock it (both hard and soft). The script which I use to convert my wifi into an AP is below
#Initial wifi interface configuration
ifconfig "$1" up 192.168.2.1 netmask 255.255.255.0
sleep 2
# start dhcp
sudo systemctl restart dnsmasq.service
iptables --flush
iptables --table nat --flush
iptables --delete-chain
iptables -t nat -A POSTROUTING -o "$2" -j MASQUERADE
iptables -A FORWARD -i "$1" -j ACCEPT
sysctl -w net.ipv4.ip_forward=1
#start hostapd
hostapd /etc/hostapd/hostapd.conf  &> /dev/null &
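The script takes the wifi interface and the outgoing interface as its two arguments; a hypothetical invocation (the script name and interface names are illustrative) would be:
sudo ./start-ap.sh wlan0 eth0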
I tried rebooting the laptop and for some time I managed to convert my wifi into an AP. I noticed at the same time that Network Manager is not started once the laptop boots; yes, this also started happening after the recent upgrade, which I guess is the black magic of systemd. After some time I noticed the wifi had gone down and now I couldn't bring it up because it was blocked by RF-KILL. After checking the syslog I noticed the following lines.
Jul 18 23:09:30 rudra kernel: [ 1754.891060] IPv6: ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): using nl80211 for WiFi device control
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): driver supports Access Point (AP) mode
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): new 802.11 WiFi device (driver: 'ath9k' ifindex: 10)
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): exported as /org/freedesktop/NetworkManager/Devices/8
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): device state change: unmanaged -> unavailable (reason 'managed') [10 20 2]
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): preparing device
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> devices added (path: /sys/devices/pci0000:00/0000:00:1c.1/0000:04:00.0/net/mon.wlan0, iface: mon.wlan0)
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> device added (path: /sys/devices/pci0000:00/0000:00:1c.1/0000:04:00.0/net/mon.wlan0, iface: mon.wlan0): no ifupdown configuration found.
Jul 18 23:09:33 rudra ModemManager[891]: <warn>  Couldn't find support for device at '/sys/devices/pci0000:00/0000:00:1c.1/0000:04:00.0': not supported by any plugin
Well, I couldn't figure out much, but it looks like NetworkManager came up, and after seeing the interface mon.wlan0, a monitoring interface created by hostapd to monitor the AP, it goes mad and tries to do something with it. I've no clue what it is doing and don't have enough patience to debug it; probably some expert can give me hints on this. So as a last resort I purged NetworkManager completely from the system, settled back on good old wicd and rebooted the system. After the reboot the wifi card is happy and is not blocked by RF-KILL, and now I can convert it to an AP and use it as long as I wish without any problems. Wicd is not a great tool, but it's good enough to get the job done, and it does only what it is asked to do, unlike NetworkManager. So in short
NetworkManager please stop f***ing with my settings and stop acting oversmart.

7 June 2014

Vasudev Kamath: Exposing function in python module using entry_points, WSME in a Flask webapp

The heading might be ambiguous, but I couldn't figure out a better one, so let me start by explaining what I'm trying to solve here.
Problem I have a python module which contains a function that I want to expose as a REST web service in a Flask application. I use WSME in the Flask application, which needs the signature of the function in question, and the problem comes into the picture because the function to be exposed is foreign to the Flask application: it resides in a separate python module.
Solution While reading Julien Danjou's Hacker's Guide To Python book I came across the setuptools entry_points concept, which can be used to extend the existing features of a tool with plug-ins. So here I'm going to use this entry_points feature from setuptools to provide a function in the module which exposes the signature of the function[s] to be exposed through REST. Of course this means I need to modify the module in question to add the entry_points and a function giving out the signature of the function to be exposed. I will explain this with a small example. I have a dummy module which provides an add function and a function which exposes the add function's signature.
def add(a, b):
    return a + b
def expose_rest_func():
    return [add, int, int, int]
This is stored in the dummy/__init__.py file. I use the pbr tool to package my python module. Below is the content of the setup.cfg file.
[metadata]
name = dummy
author = Vasudev Kamath
author-email = kamathvasudev@gmail.com
summary = Dummy module for testing purpose
version = 0.1
license = MIT
description-file =
  README.rst
requires-python = >= 2.7
[files]
packages =
  dummy
[entry_points]
myapp.api.rest =
  rest = dummy:expose_rest_func
The special thing in the above file is the entry_points section, which defines the function to be hooked into the entry_point. In our case the entry_point myapp.api.rest is used by our Flask application to interact with modules which expose it. The function obtained by accessing the entry_point is expose_rest_func, which gives the function to be exposed, its argument types and its return type as a list. If we were only supporting python3 it would have been sufficient to know the function name alone and use function annotations in the function definition; since I want to support both python2 and python3, that is out of the question. Now just run the following commands in a virtualenv to get the module installed.
PBR_VERSION=0.1 python setup.py sdist
pip install dist/dummy_module-0.1.tar.gz
Now if you want to see whether the module is exposing the entry_point or not, use the entry_point_inspector tool; after installing it you get a command called epi, and if you run it as follows you should see our myapp.api.rest group in its output
epi group list
+------------------------------+
| Name                         |
+------------------------------+
| cliff.formatter.completion   |
| cliff.formatter.list         |
| cliff.formatter.show         |
| console_scripts              |
| distutils.commands           |
| distutils.setup_keywords     |
| egg_info.writers             |
| epi.commands                 |
| flake8.extension             |
| setuptools.file_finders      |
| setuptools.installation      |
| myapp.api.rest               |
| stevedore.example.formatter  |
| stevedore.test.extension     |
| wsme.protocols               |
+------------------------------+
So our entry_point is exposed. Now we need to access it in our Flask application and expose the function using WSME. That is done by the code below.
from wsmeext.flask import signature
import flask
import pkg_resources
def main():
   app = flask.Flask(__name__)
   app.config['DEBUG'] = True
   for entrypoint in pkg_resources.iter_entry_points('myapp.api.rest'):
       # Ugly but fix is only supporting python3
       func_signature = entrypoint.load()()
       app.route('/' + func_signature[0].__name__, methods=['POST'])(
           signature(func_signature[-1],
               *func_signature[1:-1])(func_signature[0]))
   app.run()
if __name__ == '__main__':
    main()
The entry_points under myapp.api.rest are iterated using the pkg_resources package provided by setuptools. When I load the entry_point I get back the function to be used, which is called in the same place to get the function signature. Then I call the Flask and WSME decorator functions (yes, instead of decorating, I'm applying them directly to the function to be exposed). The code looks a bit ugly where I'm accessing the list using slices, but I can't help it due to the limitations of python2. With python3 there is new packing and unpacking stuff which makes the code look a bit cooler, see below.
from wsmeext.flask import signature
import flask
import pkg_resources
def main():
    app = flask.Flask(__name__)
    app.config['DEBUG'] = True
    for entrypoint in pkg_resources.iter_entry_points('myapp.api.rest'):
        func, *args, rettype = entrypoint.load()()
        app.route('/' + func.__name__, methods=['POST'])(
            signature(rettype, *args)(func))
    app.run()
if __name__ == '__main__':
    main()
You can access the service at http://localhost:5000/add; depending on the Accept header of the HTTP request you will get either an XML or a JSON response. If you access it from a browser you will get an XML response.
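For example, a quick way to try it out from the command line (illustrative; the route above only accepts POST, and the response format depends on the Accept header) is:
curl -X POST -H 'Accept: application/json' 'http://localhost:5000/add?a=2&b=3'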
Usecase Now if you are wondering what the reason behind all this is: it is for the SILPA Project. I'm trying to implement a REST service for all the Indic language computing modules. Since all these modules are independent of SILPA, which is a Flask web app, I had to find a way to achieve this, and this is what I came up with.
Conclusion I'm not sure if there are other approaches to achieve this; if there are, I would love to hear about them. You can send your comments and suggestions over email.

3 June 2014

Vasudev Kamath: Using WSME with Flask microframework

After reading Julien Danjou's book I found out about WSME (Web Service Made Easy), a Python framework which allows us to easily create web services in Python. For SILPA we needed a REST-like interface and I thought of giving it a try, as WSME readily advertises Flask integration, and this post was born when I read the documentation for that integration. First of all, Flask is a nice framework which right away allows development of a REST api for simple purposes, but my requirement was a bit more complicated: I had to expose functions living in separate python modules through SILPA. I think the detailed requirement can be part of another post, so let me just explain how to use WSME with a Flask app. WSME integration with Flask is done via the decorator function wsmeext.flask.signature, which expects you to provide it with the signature of the function to expose. And here is its documentation; basically the signature of the signature function is
wsmeext.flask.signature(return_type, *arg_types, **options)
Yeah, that's all the docs have, sadly. So basically exposing is the only thing WSME handles for us here; routing and other stuff needs to be done by Flask itself. So let's consider an example, a simple add function as shown below.
def add(a, b):
    return a + b
For providing a REST-like service, all you need is the code below.
from flask import Flask
from wsmeext.flask import signature
app = Flask(__name__)
@app.route('/add')
@signature(int, int, int)
def add(a, b):
    return a + b
if __name__ == '__main__':
       app.run()
So the first argument to signature is the return type of the function, and the rest of the arguments are the arguments of the function to be exposed. Now you can access the newly exposed service by visiting http://localhost:5000/add, but don't forget to pass the arguments either via the query string or through POST. You can restrict the methods of access via Flask's route. So what's the big deal about not having docs, right? Well, the fun part began when I used a slightly more complex return type like a dictionary or a list. Below is the modified code I'm using to demonstrate the problem I faced when using dict as the return type.
from flask import Flask
from wsmeext.flask import signature
app = Flask(__name__)
@app.route('/add')
@signature(dict, int, int)
def add(a, b):
    return {"result": a + b}
if __name__ == '__main__':
    app.run()
Basically I'm now returning a dictionary containing the result, for demonstration purposes. When I ran the application, boom, python barked at me with the following message.
Traceback (most recent call last):
File "wsme_dummy.py", line 7, in <module>
 @signature(dict, int, int)
File "c:\Users\invakam2\.virtualenvs\wsmetest\lib\site-packages\wsmeext\flask.py", line 48, in decorator
 funcdef.resolve_types(wsme.types.registry)
File "c:\Users\invakam2\.virtualenvs\wsmetest\lib\site-packages\wsme\api.py", line 109, in resolve_types
 self.return_type = registry.resolve_type(self.return_type)
File "c:\Users\invakam2\.virtualenvs\wsmetest\lib\site-packages\wsme\types.py", line 739, in resolve_type
 type_ = self.register(type_)
File "c:\Users\invakam2\.virtualenvs\wsmetest\lib\site-packages\wsme\types.py", line 668, in register
 class_._wsme_attributes = None
TypeError: can't set attributes of built-in/extension type 'dict'
After going through the code of the files involved in the above traceback, this is what I found
  1. wsmeext.flask.signature in turn uses wsme.signature, which is just an alias of wsme.api.signature.
  2. The link in the documentation sentence See @signature for parameter documentation is broken and should actually point to wsme.signature in the docs.
  3. wsme.signature actually calls resolve_type to check the types of the return value and arguments. This function checks whether a type is an instance of dict or list; in such cases it creates instances of wsme.types.DictType and wsme.types.ArrayType respectively, with the value types taken from the argument.
  4. When I just passed the built-in type dict, control went to the else part, which passes the type to the wsme.types.Registry.register function; that tries to set the attribute _wsme_attributes, which raises TypeError, as we can't set attributes of built-in types.
So by inspecting the code of wsme.types.Registry.resolve_type and wsme.types.Registry.register it is clear that when an argument or return type is a dictionary/list, what signature expects is an instance of a dictionary/list containing the type of the values in it. Maybe that sentence is a bit vague and I'm not sure how to put it more clearly, but as an example, in our case the add function returns a dictionary with a string key and an int value, so the return type argument for signature will be {str: int}. Similarly, if you return an array of int values it will be [int]. With the above understanding our add function now looks like below.
@signature({str: int}, int, int)
def add(a, b):
   return {'result': a + b}
and now the code worked just fine! What I couldn't figure out is that there seems to be no way to have a tuple as a return value or argument, but I guess that is not a big deal. So the immediate task for me after finding this is to fix the link in the documentation to point to wsme.signature, and probably put a note somewhere in the documentation about the above finding.
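To sanity-check the fixed version, a request like the following (illustrative; arguments passed via the query string, response format negotiated via the Accept header) should return the result wrapped in a dictionary:
curl 'http://localhost:5000/add?a=2&b=3'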

13 May 2014

Vasudev Kamath: Enabling DNSSEC for copyninja.info

Recently I've been seeing a lot of posts about DNSSEC on the Internet and I thought I should configure my domain to be secured by DNSSEC. The copyninja.info domain is now secured with DNSSEC; you can verify this with the DNSSEC analyzer by Verisign, the DNSViz online tool, or by installing the DNSSEC validator addon for your browser. There are a good number of tutorials and guides available for enabling DNSSEC for your domain; still, I want to note down the steps I followed here for the record (of course it will be helpful for me if I forget them ;-)). The first step is installing the bind9 and dnssec-tools packages; if you use aptitude, installing dnssec-tools will pull in bind9 unless you have configured aptitude to not install Recommends. Next, set up the zone file for your domain: first make a copy of /etc/bind/db.local as /etc/bind/db.example.com, replacing example.com with your domain name, and then add your zone records to the zone file. Next edit the /etc/bind/named.conf.local file and add the following lines
zone "example.com"  
    type master;
    file "/etc/bind/db.copyninja.info";
    allow-transfer  secondary; ;
 ;
Here replace secondary with your secondary DNS servers. If you don't have one you can omit this, but it is always recommended to have secondary DNS servers for a zone, in case the primary fails. After this we need to enable DNSSEC in bind; this is done by editing the file /etc/bind/named.conf.options and adding the following lines to the options section.
dnssec-validation yes;
dnssec-enable yes;
dnssec-lookaside auto;
More explanation on this can be found in the Linux Journal article. Now it's time to create the DNSSEC keys and sign your zone; more about the different DNSSEC keys and records can be found in the Linux Journal article on implementation. I used the zonesigner utility from dnssec-tools, which does the job of signing and including the KSK and ZSK keys into the bind configuration, which otherwise would have to be done manually. Here is the command line I used for generating the keys, thanks to Jonas for this.
mkdir -p /etc/bind/keys
zonesigner -algorithm RSASHA256 -keydirectory /etc/bind/keys \
       -dsdir /etc/bind/keys -archivedir /etc/bind/keys/archive \
       /etc/bind/db.example.com
Here we store our keys in the /etc/bind/keys directory and use the RSASHA256 algorithm for key generation, which is stronger than the default RSASHA1 (at least that's what Jonas told me). This creates the ZSK and KSK for the zone being signed and produces a signed zone file db.example.com.signed in the same directory as the original zone file. Now all you need to do is change the zone file from db.example.com to db.example.com.signed in the file directive of your named.conf.local.
Note that these keys expire after 30 days, so you need to re-sign your zone before then. For re-signing just run zonesigner from /etc/bind/keys; you can set up a cron job to do this periodically.
zonesigner -zone example.com /path/to/db.example.com
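As a sketch of such a cron job (the schedule, paths and the reload step are illustrative and should be adapted to your setup), a monthly crontab entry could look like:
0 3 25 * * cd /etc/bind/keys && zonesigner -zone example.com /etc/bind/db.example.com && rndc reload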
Our signed zone is ready, but we are not done yet! For DNSSEC to work, others should trust your signed records, and for this you need to register your public keys with the registrar for your domain; this can be done via your domain provider (in my case Gandi). Check your domain name provider's documentation on how to do this; for Gandi users there is nice documentation.

2 May 2014

Vasudev Kamath: Note to Self: How to use url_for in Flask application

url_for is normally used to avoid hard coding URLs in Flask based webapps. I've been using it in the SILPA port written using Flask. But this was written long back, as a POC to show Santhosh, the author of the SILPA app, and it even managed to go into production :-). Recently I started organizing the code and switched to flask.Blueprint from the flask.views.MethodView class it previously used, and here I started facing problems with the old templates. I'm just going to explain the difference when using url_for with MethodView and with Blueprint. Let's have some sample code for MethodView first; this code is from the Flask documentation.
from flask.views import MethodView
class UserAPI(MethodView):
       def get(self):
           users = User.query.all()
           ...
       def post(self):
           user = User.from_form_data(request.form)
           ...
app.add_url_rule('/users/', view_func=UserAPI.as_view('users'))
Here we have a view derived from MethodView which contains the logic for handling the requests. We use add_url_rule to register a rule to handle the /users/ URL endpoint, and we pass the class as the view function, which is done by the as_view class method; we can then refer to this view in our Jinja2 templates using the name users. So a statement like
url_for('users')
in our templates will be converted to the /users/ URL when the page is rendered to the client by Flask. Now when I replaced MethodView in favor of Blueprint, I started getting werkzeug.routing.BuildError thrown in my face and I had no clue why. Yeah, I know I'm bad at reading documentation, but even after reading the documentation I was still thinking that
url_for('/')
should return a proper URL, and I was wondering why it was failing. Finally, after re-reading the documentation for url_for, it became clear to me; the url_for definition looks like below
flask.url_for(endpoint, **values)
Here endpoint is actually the function which is supposed to serve the URL, and **values are the arguments for this function. The URL in question should be defined in the python code using a decorator. In my case the following is the new function serving the web pages for SILPA.
bp = Blueprint('frontend', __name__)
@bp.route(_BASE_URL, defaults={'page': 'index.html'})
@bp.route(_BASE_URL + '<page>')
def serve_pages(page):
       if page == "index.html":
          return render_template('index.html', title='SILPA',
                               main_page=_BASE_URL,
                               modules=_modulename_to_display)
       elif page == "License":
          return render_template('license.html', title='SILPA License',
                                main_page=_BASE_URL,
                                modules=_modulename_to_display)
       elif page == "Credits":
           return render_template('credits.html', title='Credits',
                                main_page=_BASE_URL,
                                modules=_modulename_to_display)
       elif page == "Contact":
           return render_template('contact.html', title='Contact SILPA Team',
                                main_page=_BASE_URL,
                                modules=_modulename_to_display)
       else:
           # modules requested!.
           if page in _display_module_map:
               return render_template(_display_module_map[page] + '.html',
                                    title=page, main_page=_BASE_URL,
                                    modules=_modulename_to_display)
           else:
               # Did we encounter something which is not registered by us?
               return abort(404)
You can ignore the function body, just note the decorators: here I'm registering the function serve_pages, with page as argument, for the URL patterns / and /<page>. _BASE_URL here is the mount point of the application; it can be just / or /mountpoint, and the registered URLs change depending on it. Now I need to modify all the url_for calls in my templates to look like below
url_for('.serve_pages', page='/License') # for /License
url_for('.serve_pages') # which will turn in to /index.html
The . in front of the function name refers to the current Blueprint; in my case Flask will consider frontend.serve_pages as the function name and generate the appropriate URL at run time. So my fault was misunderstanding the endpoint argument as a URL endpoint, whereas it is actually the name of the function supposed to serve the page. With MethodView, on the other hand, I could simply convert the class to a view function with my preferred name, like UserAPI.as_view('/'), so url_for('/') just works.

7 April 2014

Vasudev Kamath: Loading Python modules/plug-ins at runtime

Sometimes it is desirable to load arbitrary python files or pre-installed python modules during application run time. I had encountered 2 such use cases: one in the SILPA application and the other in the dictionary-bot which I was refactoring recently.
Case 1: Loading installed python module In case of SILPA I need to load pre-installed modules and here is the old code , that is a bit hacky code I copied from Active State Python recipies. I found a bit better way to do it using importlib module as shown below.
from __future__ import print_function
import sys
import importlib
def load_module(modulename):
    mod = None
    try:
        mod = importlib.import_module(modulename)
    except ImportError:
        print("Failed to load  module ".format(module=modulename),
                     file=sys.stderr)
    return mod
Here importlib itself takes care of checking whether modulename is already loaded by looking at sys.modules[modulename]; if it is loaded it returns that value, otherwise it loads the module and sets it in sys.modules[modulename] before returning the module itself.
Case 2: Loading python files from arbitrary location In the case of the dictionary bot my requirement was a bit different: I had some python files lying around in a directory, which I wanted to plug into the bot at run time and use depending on some conditions. So the basic structure I was looking at is as follows.
      pluginloader.py
      plugins
       |
       |__ aplugin.py
       |
       |__ bplugin.py
pluginloader.py is the file which needs to load python files under plugins directory. This was again done using importlib module as shown below.
import os
import sys
import re
import importlib
def load_plugins():
    pysearchre = re.compile('.py$', re.IGNORECASE)
    pluginfiles = filter(pysearchre.search,
                           os.listdir(os.path.join(os.path.dirname(__file__),
                                                 'plugins')))
    form_module = lambda fp: '.' + os.path.splitext(fp)[0]
    plugins = map(form_module, pluginfiles)
    # import parent module / namespace
    importlib.import_module('plugins')
    modules = []
    for plugin in plugins:
             if not plugin.startswith('__'):
                 modules.append(importlib.import_module(plugin, package="plugins"))
    return modules
The above code first searches for all python files under the specified directory and creates a relative module name from each. For example, the file aplugin.py becomes .aplugin. Before loading the modules themselves we load the parent module, in our case plugins; this is because relative imports in python expect the parent module to be already loaded. Finally, for relative imports to work with importlib.import_module we need to specify the parent module name in the package argument. Note that we ignore files beginning with __, specifically we don't want to import __init__.py, as this is done when we import the parent module. The above code was inspired by an answer on StackOverflow which uses the imp module; I avoided imp because it has been deprecated since Python 3.4 in favor of the importlib module.
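As a quick way to exercise the loader (a sketch, assuming it is run from the directory containing pluginloader.py and the plugins package, and that plugins/__init__.py exists):
python -c 'import pluginloader; print(pluginloader.load_plugins())'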

12 March 2014

Vasudev Kamath: Working around type system of Go with unsafe

The Go language has a strong type system, unlike C, and sometimes this is a headache when we want to interact with C data types through cgo, or just convert a Go type to, let's say, a byte slice. I recently faced this problem, and after poking around I learned that Go provides the unsafe package, which can be used to work around Go's type system.
Problem Recently I started using cgo to use a C library for which I had to write some tools for development and testing. The reason I chose Go for this is that prototyping and writing quick tools is much easier in Go than in C. The C library had some functions which take pointers to array types and fill them with values. The problem I was facing was how to create a C array in Go. Go does have arrays, but they are largely used as the internal representation of the more flexible type called slice, and I can't directly cast a byte slice into a C array. The second problem was that I had to store arbitrary Go types like float (float32, float64) and int (int32, int64) etc. into a C array. So in brief, the problems that needed to be solved are
  1. Find a way to convert byte slice from Go into C array and vice versa.
  2. Find a way to convert and store Go types into a C array.
Solution Basically a C array is a sequence of memory locations which can be statically or dynamically allocated. In cgo it is possible to access the C standard library functions for memory allocation, so why not use them. The memory allocation function returns a pointer to the start of the allocated memory; we can use this pointer to write Go's bytes into the memory locations and to read bytes from the memory locations into a Go slice. The pointer returned by the C allocation functions is not directly usable for memory dereferencing in Go, and this is where the unsafe package kicks in. We cast the return value of the C allocation function to the unsafe.Pointer type, and from the documentation of the unsafe package,
  1. A pointer value of any type can be converted to a Pointer.
  2. A Pointer can be converted to a pointer value of any type.
  3. A uintptr can be converted to a Pointer.
  4. A Pointer can be converted to a uintptr.
So we can cast an unsafe.Pointer to uintptr, the Go type which is large enough to hold any memory address, and it can be used for pointer arithmetic just like in C (of course with some more casting). Below I'm pasting a simplified version of the code in C which I wrote for this post.
#ifndef __BYTETEST_H__
#define __BYTETEST_H__
typedef unsigned char UBYTE;
extern void ArrayReadFunc(UBYTE *arrayout);
extern void ArrayWriteFunc(UBYTE *arrayin);
#endif
#include "bytetest.h"
#include <stdio.h>
#include <string.h>
void ArrayReadFunc(UBYTE *arrayout)
{
     UBYTE array[20] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
                        11, 12, 13, 14, 15, 16, 17,
                        18, 19, 20};
     memcpy(arrayout, array, 20);
}

void ArrayWriteFunc(UBYTE *arrayin)
{
     UBYTE array[20];
     memcpy(array, arrayin, 20);
     printf("Byte slice array received from Go:\n");
     for(int i = 0; i < 20; i++) {
             printf("%d ", array[i]);
     }
     printf("\n");
}
These functions were written just for this post and don't do anything useful. As you can see, ArrayReadFunc takes a pointer to an array and fills it with the contents of another array using memcpy. ArrayWriteFunc, on the other hand, takes a pointer to an array and copies its contents into an internal array. I added the print logic to ArrayWriteFunc just to show that the values passed from Go are making it here. Below is the Go code which uses the above C files: it passes a byte slice to get values out of the C code, and an array made from a byte slice to the C function to send values in.
package main

/*
#cgo CFLAGS: -std=c99
#include "bytetest.h"
#include <stdlib.h>
*/
import "C"

import (
    "fmt"
    "unsafe"
)

// ReadArray allocates a 20-byte C array and asks the C side to fill it.
// The caller is responsible for freeing the returned memory.
func ReadArray() unsafe.Pointer {
    var outArray = unsafe.Pointer(C.calloc(20, 1))
    C.ArrayReadFunc((*C.UBYTE)(outArray))
    return outArray
}

// WriteArray hands an already filled C array to the C side.
func WriteArray(inArray unsafe.Pointer) {
    C.ArrayWriteFunc((*C.UBYTE)(inArray))
}

// CArrayToByteSlice copies size bytes from a C array into a Go byte slice.
func CArrayToByteSlice(array unsafe.Pointer, size int) []byte {
    var arrayptr = uintptr(array)
    var byteSlice = make([]byte, size)
    for i := 0; i < len(byteSlice); i++ {
        byteSlice[i] = byte(*(*C.UBYTE)(unsafe.Pointer(arrayptr)))
        arrayptr++
    }
    return byteSlice
}

// ByteSliceToCArray allocates a C array and copies a Go byte slice into it.
func ByteSliceToCArray(byteSlice []byte) unsafe.Pointer {
    var array = unsafe.Pointer(C.calloc(C.size_t(len(byteSlice)), 1))
    var arrayptr = uintptr(array)
    for i := 0; i < len(byteSlice); i++ {
        *(*C.UBYTE)(unsafe.Pointer(arrayptr)) = C.UBYTE(byteSlice[i])
        arrayptr++
    }
    return array
}

func main() {
    carray := ReadArray()
    defer C.free(carray)
    carraybytes := CArrayToByteSlice(carray, 20)
    fmt.Println("C array converted to byte slice:")
    for i := 0; i < len(carraybytes); i++ {
        fmt.Printf("%d ", carraybytes[i])
    }
    fmt.Println()
    gobytes := []byte{21, 22, 23, 24, 25, 26, 27, 28, 29, 30,
        31, 32, 33, 34, 35, 36, 37, 38, 39, 40}
    gobytesarray := ByteSliceToCArray(gobytes)
    defer C.free(gobytesarray)
    WriteArray(gobytesarray)
}
The functions ReadArray and WriteArray are just wrappers around the calls to their C counterparts ArrayReadFunc and ArrayWriteFunc. ReadArray returns an unsafe.Pointer to the allocated C array, which should be freed by the caller. WriteArray takes an unsafe.Pointer pointing to the memory location containing a C array. The functions of interest are CArrayToByteSlice and ByteSliceToCArray. It should be fairly clear from the code what is happening in these functions, but I will still explain them briefly. ByteSliceToCArray allocates a C array using calloc from the C standard library. It then creates a uintptr, a pointer type in Go, which is used to dereference each memory location and store the bytes from the input byte slice in them. CArrayToByteSlice, on the other hand, creates a uintptr by casting the input unsafe.Pointer and then uses it to dereference values from memory and store them in a byte slice with suitable casts. So let's build the code, run it and see the output:
C array converted to byte slice:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
Byte slice array received from Go:
21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40
So yes, it actually works and values are moving between C and Go. This solves the first problem at hand; next is converting arbitrary Go types into byte slices. There are many cases where we would like to convert an arbitrary Go type like int or float into bytes. One such use case I found was when writing a TCP client to communicate with a server written in C speaking a custom protocol. Here I'm just going to show how to convert types like float and int to a byte slice; I've not tried converting structures, but it is certainly possible. Below is a function which can convert int32 and float32 into a byte slice and can also be extended for other types.
func CopyValueToByte(value interface{}) []byte {
    var valptr uintptr
    var slice []byte
    switch t := value.(type) {
    case int32:
        i := value.(int32)
        valptr = uintptr(unsafe.Pointer(&i))
        slice = make([]byte, unsafe.Sizeof(i))
    case float32:
        f := value.(float32)
        valptr = uintptr(unsafe.Pointer(&f))
        slice = make([]byte, unsafe.Sizeof(f))
    default:
        fmt.Fprintf(os.Stderr, "Unsupported type: %T\n", t)
        os.Exit(1)
    }
    for i := 0; i < len(slice); i++ {
        slice[i] = byte(*(*byte)(unsafe.Pointer(valptr)))
        valptr++
    }
    return slice
}
This function is generic and can take values of various types. First it uses Go's type switch to determine the type, creates a uintptr from the value's address, and allocates a byte slice sized according to the value as calculated by unsafe.Sizeof. It then uses the pointer to dereference the value from memory, copying each byte into the byte slice. The idea used here is that every type is represented as a certain number of bytes in memory. Below is the entire program.
package main

import (
    "fmt"
    "os"
    "unsafe"
)

// CopyValueToByte copies the in-memory representation of an int32 or
// float32 value into a freshly allocated byte slice.
func CopyValueToByte(value interface{}) []byte {
    var valptr uintptr
    var slice []byte
    switch t := value.(type) {
    case int32:
        i := value.(int32)
        valptr = uintptr(unsafe.Pointer(&i))
        slice = make([]byte, unsafe.Sizeof(i))
    case float32:
        f := value.(float32)
        valptr = uintptr(unsafe.Pointer(&f))
        slice = make([]byte, unsafe.Sizeof(f))
    default:
        fmt.Fprintf(os.Stderr, "Unsupported type: %T\n", t)
        os.Exit(1)
    }
    for i := 0; i < len(slice); i++ {
        slice[i] = byte(*(*byte)(unsafe.Pointer(valptr)))
        valptr++
    }
    return slice
}

func main() {
    a := float32(-10.3)
    floatbytes := CopyValueToByte(a)
    fmt.Println("Float value as byte slice:")
    for i := 0; i < len(floatbytes); i++ {
        fmt.Printf("%x ", floatbytes[i])
    }
    fmt.Println()
    // Copy the bytes back into a new float32 to show the round trip works.
    b := new(float32)
    bptr := uintptr(unsafe.Pointer(b))
    for i := 0; i < len(floatbytes); i++ {
        *(*byte)(unsafe.Pointer(bptr)) = floatbytes[i]
        bptr++
    }
    fmt.Printf("Byte value copied to float var: %f\n", *b)
}
The above conversion can also be achieved using the encoding/binary package provided by Go, but I have been told that it makes things pretty slow. A rough sketch of that approach follows below.
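For comparison, here is a minimal sketch, not part of the original tools, of the same float32 round trip done with encoding/binary; note that the byte order has to be chosen explicitly, which the unsafe-based copy leaves implicit.
package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
)

func main() {
    // Encode a float32 into bytes without using unsafe.
    var buf bytes.Buffer
    if err := binary.Write(&buf, binary.LittleEndian, float32(-10.3)); err != nil {
        fmt.Println("binary.Write failed:", err)
        return
    }
    fmt.Printf("Float value as byte slice: % x\n", buf.Bytes())

    // Decode the bytes back into a float32.
    var f float32
    if err := binary.Read(bytes.NewReader(buf.Bytes()), binary.LittleEndian, &f); err != nil {
        fmt.Println("binary.Read failed:", err)
        return
    }
    fmt.Printf("Byte value copied to float var: %f\n", f)
}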
Conclusion So Go's unsafe.Pointer is a really powerful thing which allows us to work around Go's type system, but as the package documentation says, it should be used with care.
PS: I'm not really sure whether it is recommended to use allocation functions from the C standard library; I will wait for expert gophers to comment on that.

1 March 2014

Vasudev Kamath: Installing and Configuring NetBSD on Qemu in Debian

The first step is getting a NetBSD ISO image for the installation. It can be downloaded from here. The next step is creating a disk image to install NetBSD on. This is done using the qemu-img tool as below
$ qemu-img create -f raw netbsd-disk.img 10G
The image format used is raw and the size is specified as the last argument; tune the size as per your need. To start the installation run the following command
$ qemu-system-x86_64 -m 256M -hda netbsd-disk.img -cdrom \
             NetBSD-6.1.3-amd64.iso -display curses -boot d \
             -net nic -net user
This uses user-mode networking, so you won't have Internet access during installation. I couldn't figure out how to get the network working during installation, so I configured it after installation. Once you run the above command you will be presented with 4 options as follows.
  1. install netbsd
  2. install netbsd with ACPI disabled
  3. install netbsd with ACPI and SMP disabled
  4. drop to boot shell
Even though I first installed using option 1, I couldn't get it to boot after installation, so I had to reinstall with option 2, which works fine. I'm not going to explain each step of the installation here because the installer is really simple and straightforward! I guess the NetBSD installer is the simplest installer I have encountered since the day I started with Linux. It is simple but powerful and gets the job done very easily, and I didn't read the installation manual before using it.
Enabling Networking This section involves a mixture of configuration on the Debian host and inside NetBSD to get the network working. The Debian wiki page on QEMU helped me here. To share the network with Qemu there are 2 possibilities
  1. Bridged networking between host and guest using bridge-utils
  2. Using VDE (Virtual Distributed Ethernet)
Option 1, which is explained in the wiki linked above, didn't work for me as I use a CDC Ether based data card for connecting to the Internet, which gets detected as eth1 on my machine. When bridging happens between tap0 and eth1 I end up losing Internet on my host machine. So I chose to use VDE instead. First install the packages vde2 and uml-utilities; once done, edit the /etc/network/interfaces file and add the following lines:
auto vdetap
iface vdetap inet static
   address 192.168.2.1
   netmask 255.255.255.0
   vde2-switch -t vdetap
We can use dnsmasq as a DHCP server for the vdetap interface; a minimal sketch of such a configuration follows the commands below. Run the commands below to get vdetap up
modprobe tun
ifup vdetap
/etc/init.d/dnsmasq restart
newgrp vde2-net # run as user starting Qemu VM's
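For reference, a minimal dnsmasq configuration limited to vdetap could look something like the following; the file name and DHCP range here are illustrative, not from my actual setup.
# /etc/dnsmasq.d/vdetap.conf
interface=vdetap
bind-interfaces
dhcp-range=192.168.2.10,192.168.2.100,12h
dhcp-option=option:router,192.168.2.1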
I couldn't get a successful output from the newgrp command; I was getting some crypt: Invalid argument output. But I could still get the network working in NetBSD, so I decided to ignore that for now. Now start the NetBSD qemu instance using the following command
$ qemu-system-x86_64 -m 256M -hda \
             /mnt/kailash/BSDWorld/netbsd-disk.img \
             -net nic -net vde,sock=/var/run/vde2/vdetap.ctl -display curses
Once the system is up, log in as the root user; NetBSD will warn you about this and suggest creating another user, but for now ignore it. To find the network interface in NetBSD just run the usual ifconfig command; in my case the interface is named wm0. The first step is configuring the IP address for your interface and setting up the gateway route. Run the commands below for this purpose
# ifconfig wm0 192.168.2.2 netmask 255.255.255.0
# route add default 192.168.2.1
Note that I set the gateway to the IP address of vdetap on my host machine. Now try pinging the host; you can even try ssh to the host system. But note that this is not persistent over reboots, and for some reason I haven't yet figured out how to make NetBSD get an address over DHCP from my host machine. I will update this once I figure it out. To make the address persistent over reboots you need to create a file named /etc/ifconfig.<interface>. Replace interface with the proper interface on your running NetBSD; in my case this file is /etc/ifconfig.wm0 and has the following content:
192.168.2.2 netmask 0xffffff00 media autoselect
Set the DNS server to the host by creating /etc/resolv.conf with the following content:
nameserver 192.168.2.1
After this you need to set up NAT/masquerading using iptables. Just copy the following script to a file and execute it as root
#!/bin/sh
# Restart the dnsmasq
/etc/init.d/dnsmasq restart
# Set nat rules in iptables
iptables --flush
iptables --table nat --flush
iptables --delete-chain
iptables --table nat --delete-chain
# Replace accordingly usb0 with ppp0 for 3G
iptables --table nat --append POSTROUTING --out-interface eth1 -j MASQUERADE
iptables --append FORWARD --in-interface vdetap -j ACCEPT
# Enable IP forwarding in Kernel
sysctl -w net.ipv4.ip_forward=1
With the above setup you will be able to get DNS resolution even after you reboot the Qemu instance, but the Internet connection will not work until you run the route command I mentioned above. I still haven't figured out how to persist the route, but I will update it here once I do (one likely candidate is sketched below).
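One likely way to persist the default route, which I have not verified in this setup, is NetBSD's /etc/mygate file, read by the network rc script at boot; it contains just the gateway address:
192.168.2.1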
Note that you won't be able to SSH into NetBSD as root (maybe it is configured not to allow it by default), so you need to create a normal user before trying to SSH from host to guest. Also make sure you add the user to the wheel group to allow them to execute the su command.
So now I have NetBSD running on my laptop with a mere 256M of RAM, and it is amazingly fast even with such low RAM. I've created a new user and can SSH into it from the host machine and use it just like I use a server! I will put up notes on my BSD adventure here periodically. The feeling of using a BSD is amazing :-)
Update: I forgot to add the masquerading step; I've added it now.

Vasudev Kamath: Gzipped response for CSS/JS with Apache and mod_uwsgi

So it may sound normal to receive a compressed response for CSS/JS files from Apache2, but what I faced was completely different behaviour: not just the body of the response but also the headers were getting compressed and sent to the browser, and the browser was unable to interpret the response, leaving the page rendered without CSS and JS. That is the long story short; let me explain the case in detail and what I found out. I have hosted SILPA on my VPS using uWSGI. Until now I had been using the libapache2-mod-proxy-uwsgi plugin, for which uWSGI should use a network socket, and in the Apache2 config I need a ProxyPass directive pointing to uwsgi://host:port; with that, everything works out of the box. There is a drawback to using a network socket in uWSGI: the limited number of ports (65535) and a possible clash with other services if we use a wrong port number; additionally, network sockets are slower compared to file sockets. So yesterday I thought of changing it to a file socket and removed the socket directive in the ini file of the uWSGI application container for SILPA, and that is where all the problems started. My updated configuration can be seen on our documentation page, that is if you are interested in looking at my configuration file :-). A rough sketch of the two socket styles follows below.
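For context, here is a rough sketch of the two socket styles in a uWSGI ini file; the host, port and path are illustrative and not my actual SILPA configuration. With mod_proxy_uwsgi the application container listens on a network socket:
[uwsgi]
socket = 127.0.0.1:8081
whereas a file socket would instead be declared something like this:
[uwsgi]
socket = /run/uwsgi/app/silpa/socket
On Debian, leaving the socket directive out altogether lets the distribution's uwsgi infrastructure fall back to its own default, which I believe is a file socket under /run/uwsgi/app/<name>/.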
Problem Reporting We got selected for GSoC again this year and we have a lot of students jumping into our channel #silpa on irc.freenode.net. Yesterday night some students reported that they were facing a 500 internal server error, and subsequently others mentioned that no theme was being rendered. I checked and found everything fine for me, only to notice that my browser was using cached content; when I cleared the cache, there you go, everything vanished: no theme, no CSS, no JavaScript, just plain HTML!
Investigation So I thought I should investigate this and fired up Firebug on my Iceweasel to observe the network traffic. Below is what I observed in the Firebug network console (screenshot: no response headers). You can see there are no response headers in the image, but there is a response, and the response content was visible in Firebug (screenshot: gzipped response). The weird part was that when using wget/curl directly to get the CSS I was getting a correct reply and the file, which was puzzling. So I went ahead, opened the CSS URL directly in the browser and saved the resulting file. When I used the file command on it, the output said the saved file was gzip compressed! I uncompressed it, opened the file, and inside it I found the response header and the response body! So what just happened? Did Apache somehow manage to compress the entire response, not just the response body? The more puzzling question was why it worked when I was using mod_proxy_uwsgi and failed when I switched to mod_uwsgi. I had no clue at this point why this behaviour was coming up.
Possible Solution I was still not sure how to resolve this and started searching the net, but nothing was coming up in the search results which could explain this to me. Finally I stumbled on a link which is totally unrelated, but I saw the word mod_deflate in the content and wandered around my system to see if it was enabled. Yes, mod_deflate was enabled; I opened the conf file and saw the following.
<IfModule mod_deflate.c>
       # these are known to be safe with MSIE 6
       AddOutputFilterByType DEFLATE text/html text/plain text/xml
       # everything else may cause problems with MSIE 6
       AddOutputFilterByType DEFLATE text/css
       AddOutputFilterByType DEFLATE application/x-javascript application/javascript application/ecmascript
       AddOutputFilterByType DEFLATE application/rss+xml
</IfModule>
Interesting, so it is configured to compress CSS, JavaScript files etc. So I thought it was possibly compressing the result of mod_uwsgi, so why not try disabling mod_deflate and check: if it works, well and good, and if not I'm not going to lose anything. So I disabled mod_deflate and voila! Everything is working fine! (The standard Debian commands for this are sketched below.)
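For reference, on Debian disabling the module amounts to something like the following (the usual a2dismod helper followed by an Apache restart):
a2dismod deflate
service apache2 restart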
What might be the reason for this behaviour? Well, I'm not exactly sure, but here is my assumption. The whole behaviour depends on the internal implementation of the mod_uwsgi and mod_proxy_uwsgi modules. To confirm this I switched back to the mod_proxy_uwsgi module, enabled mod_deflate and observed the request and response in the Firebug network console (screenshot: response with mod_proxy_uwsgi). The response is still gzipped, but this time only the response body was gzipped and the Content-Encoding field was set properly, which lets the browser correctly uncompress the response body and use it. So it seems there is a difference between the mod_uwsgi and mod_proxy_uwsgi implementations. mod_uwsgi was sending the response along with the response headers, which were then compressed together by mod_deflate, thereby rendering the browser helpless to interpret the response. mod_proxy_uwsgi, however, seems to send only the response content without the response headers; this content was later compressed by mod_deflate and proper response headers were set by Apache before sending it to the browser. So now whose fault is this? Is it a bug in mod_uwsgi, or mod_deflate not being able to exclude response headers from compression? I've no clue! If you have a clue and want to share it with me, please consider writing to me over email. My details are available in Contacts

22 February 2014

Vasudev Kamath: First experience of driving in Bangalore

So finally I did it, I managed to ride a 2-wheeler on Bangalore roads! Officially I've had a licence to drive 2- and 4-wheelers since 2011, but I was never confident enough to ride a vehicle on the road and in traffic. Partially this is because I don't own a vehicle and have never practised driving, and partially because of the road and traffic conditions in India I was a bit afraid to do so. So this morning my roommate and I planned to go to a temple early in the morning. We got up at 5:30 and were ready to drive to Mahalaxmi Layout at 6:15. Since traffic is very light at this time I asked my roommate whether I could drive, and he agreed! So I drove the bike from Malleshwaram to Mahalaxmi Layout and it was fun. I did have a hiccup driving uphill, as my mind was not coordinating well with the brakes and gears. I had hiccups in first gear too, but I guess this will be alright once I get the hang of things. I'm still not confident enough to drive during peak hours because people do not follow traffic rules at all. They change lanes without any signal, or come into the middle of the road from side roads without looking to see whether a vehicle is coming from the other side (yes, they don't have the courtesy to wait). And then they can shout at the person who is actually following the rules. In general very few people have traffic discipline in India. But I will adapt to this once I start driving, so I'm now all set to get my 2-wheeler soon and ride on!

21 February 2014

Vasudev Kamath: Kontalk FLOSS alternative for Whatsapp and Co.

So Whatsapp has been acquired by Facebook, and this news is still hot with people discussing it all over the twitterverse. I took this opportunity to stop using Whatsapp and removed it from my phone. Possibly I could have deleted my account, but who cares. Anyway, I've been searching for a secure and FLOSS alternative to Whatsapp for quite some time now. A few days ago I found out about Telegram, but after reading a post by Tincho on Planet Debian, I decided not to use it. Recently, while going through the talk list for fossmeet.in, I found a link to Kontalk in a privacy awareness talk proposal by Praveen and thought I should give it a try. So below are my first impressions of Kontalk.
Installation and Activation Kontalk can be installed from the Play Store. For verification purposes it requests your phone number and country code and asks for verification. This should send an SMS with a code which should be entered in the text box given, and the app is ready to use. There is also the possibility of using a pre-existing verification code (if you got one from the developer directly; read below for details). I did see some glitches: the SMS was never delivered to my phone even after 2 attempts and a day's wait. I went ahead and reported a bug, and the main developer was quick to respond. After a discussion it turned out that the SMS was blocked by spam filters. He also mentioned that it is tough to get SMS delivered to India. He was kind enough to provide me with a verification code, and I used the Use existing code option to enter it and get Kontalk activated. The SMS delivery inconsistency is still present for India (and maybe other nations too): some people get the code immediately, others maybe after a couple of days, and some might not at all. Upstream is already working on a possible workaround.
User Experience Now coming to the usage part: the UI is neat and clean. I won't say it is as polished as Whatsapp or other popular apps, but it is really neat and easy to use. Some points which I like are
  • Ability to hide your presence, so others won't know whether you are online or offline (unlike Whatsapp, which advertises last seen).
  • Encrypted messages, with the ability to opt in or opt out.
  • Encrypted status messages! Only users with your phone number can see your status. (Cool, isn't it?)
  • Manually requesting to find contacts who already use Kontalk. Right, it doesn't read your contact list without your permission; you need to refresh to check who in your contacts is using Kontalk.
  • Attach and smiley options in the top right corner of the chat window, which allows easy access, unlike the keyboard-smiley switching of Whatsapp.
  • No automatic download of shared media content. By default it doesn't download any picture or video automatically; if you want to see something, click on it and select download.
  • Running your own server for Kontalk! Now that's something interesting for people who don't want to host their data on other people's infrastructure. Code for the server is available in the xmppserver repo.
But there are some rough edges too, though I'm sure they can be improved. Some points which I noticed are
  • Contact names disappear and only the number is displayed. This happened with one of my contacts, so I'm not really sure it is a bug.
  • My friend noticed that all his existing contacts suddenly vanished when he refreshed the contacts list. Again, this is possibly a bug and we are considering reporting it upstream.
  • No group chats yet. I don't see an option for that yet.
  • Attachments at the moment are restricted to pictures (and video? never tried), and uploading takes quite some time and sometimes hangs forever.
So I'm considering forwarding these to upstream and helping them by providing enough data so these can be fixed.
Technical Side All code for the client, server and protocol specs is available under GPL-v3 at the Kontalk project site. The server software is written in Python and I guess uses XMPP (but I've not cross-verified this). The server also uses MySQL as its database. These can be hosted on our own servers, but that possibly needs more than just the code, like SMS sending options etc.
Conclusion In my view Kontalk can become a great alternative to Whatsapp and co. from the Free Software world, and I encourage everyone to give it a try, which will be the first step towards helping improve it.
Disclaimer: I'm not a privacy or security expert, so what I shared above is what I noticed, and experts may see things differently. In any case I welcome comments and suggestions.

8 February 2014

Vasudev Kamath: Friendica instance on my VPS is down

I had started running a Friendica instance on my VPS. With the help of Jonas Smedegaard I managed to run Friendica in a uWSGI container. The site was running at samsargika.copyninja.info and is no longer accessible. Since the VPS itself was running Debian Wheezy I couldn't run uWSGI with PHP support on it (support for PHP in uWSGI landed after Wheezy), but Jonas was kind enough to provide me with a backported version. Recently Wheezy got a security update for PHP and that is where all the problems started. The backported uwsgi-plugin-php was not recompiled against the security-updated PHP and I couldn't upgrade things. After a few days I noticed the first freeze on my VPS and had to reboot it to get it online again. I did notice uWSGI being killed due to an OOM in syslog, but I didn't explore much and consulted Jonas about getting an updated uWSGI. That didn't happen, as Jonas himself is facing some problems with uWSGI builds. While checking again with aptitude for upgrades I accidentally confirmed the removal of uwsgi-plugin-php while getting security updates :-/. But nothing happened to my running service, as an upgrade of libraries in Debian doesn't restart all services using that library (the desired effect is a restart of the service, but I don't know the side effects involved). The second freeze happened yesterday, and on reboot uwsgi-plugin-php was missing, thereby taking my Friendica instance down. Closer investigation showed the same OOM, but this time I noticed that each OOM occurred just after the cron job running poller.php, the script which actually does all the federation in Friendica. So it was clear there is something wrong either in poller.php or in my setup which was making it eat memory and freeze my VPS. I also found some stupidity I did during configuration, which Jonas pointed out to me.
  1. Installing cronjob inside crontab rather than cron.d
  2. Installing poller.php crontab for root user :-/
I basically violated a basic rule by running an unsafe script as the root user; good that the script didn't do anything crazy. So even though my instance went down I learnt my lessons
  1. Don't ever, ever, ever run an unsafe script as root, and certainly not through cron
  2. Sandbox an unsafe script so it can be killed in time rather than taking the whole system down.
  3. PHP is not really secure; if it were secure there wouldn't be security updates and at least my site would still be running :-D
So I now need to wait till Jonas gets me a new shiny backported uWSGI linked against the new PHP on Wheezy, and until then I need to explore how I can sandbox the poller.php script.

2 February 2014

Vasudev Kamath: Moving weblog from Jekyll to Pelican

It's been a while since I was actively blogging. I wanted to start blogging again, but for some reason I was not happy with the Jekyll static site generator which was powering my previous site. So I took this chance to explore other static site generators and redesign my blog. I explored a bit of Pelican, Nikola and Frog. I didn't feel comfortable with Nikola. Frog is a static site generator written using *Racket*, and since I'm learning Scheme using Racket I was leaning towards using it, but at the last moment I decided to settle on Pelican. The main reason for not using Frog was that I just wanted to get my site up rather than wait until I had finished learning Racket to use Frog. My feeling about Pelican is that it is a nice tool, simple, and gives all the basic things like generating RSS/Atom feeds and nice theming support. With Jekyll I had to write a custom page for creating the RSS feed and had pain creating my first theme (yeah, I'm not a designer). Things might have changed in Jekyll land by now, but anyway I don't care anymore. The new design uses the pelican-bootstrap3 theme and there is no comment facility available. If you want to comment or suggest things on my posts, consider writing mail to me :-). So what happened to the older entries? Moving the old entries from the Jekyll format to Pelican was a real PITA, so I just renamed my old site to blog-archive. So this is a from-scratch site and will contain only new entries. So how do you like my new blog design? :-)

5 May 2013

Vasudev Kamath: CDBS Packaging: package relationship management

(It took me a while to come up with a new CDBS packaging series post, not because I stopped using CDBS, just because I was procrastinating and keeping myself busy.) This is the second post in the CDBS packaging series. In this post I'm going to talk about package relationship management. A good example of where this feature is useful is packages whose build dependencies and run-time dependencies overlap. Most Perl modules which have test suites have an intersection between build dependencies and run-time dependencies. So let me take the example of a Perl module. First let's see the control file of a Perl package which is not using CDBS, and then let me explain how CDBS can help improve the situation. I chose libxml-libxml-perl; let's see the part of the control file which includes Build-Depends, Depends, Suggests and Recommends.
Source: libxml-libxml-perl
Maintainer: Debian Perl Group <pkg-perl-maintainers@lists.alioth.debian.org>
Uploaders: Jonathan Yu <jawnsy@cpan.org>,
 gregor herrmann <gregoa@debian.org>,
 Chris Butler <chrisb@debian.org>
Section: perl
Priority: optional
Build-Depends: perl (>= 5.12),
 debhelper (>= 9.20120312),
 libtest-pod-perl,
 libxml2-dev,
 libxml-namespacesupport-perl,
 libxml-sax-perl,
 zlib1g-dev
Standards-Version: 3.9.4
Vcs-Browser: http://anonscm.debian.org/gitweb/?p=pkg-perl/packages/libxml-libxml-perl.git
Vcs-Git: git://anonscm.debian.org/pkg-perl/packages/libxml-libxml-perl.git
Homepage: https://metacpan.org/release/XML-LibXML/
Package: libxml-libxml-perl
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}, ${perl:Depends},
 libxml-namespacesupport-perl,
 libxml-sax-perl
Breaks: libxml-libxml-common-perl
Replaces: libxml-libxml-common-perl
Description: Perl interface to the libxml2 library
 XML::LibXML is a Perl interface to the GNOME libxml2 library, which provides
 interfaces for parsing and manipulating XML files. This module allows Perl
 programmers to make use of the highly capable validating XML parser and the
 high performance Document Object Model (DOM) implementation. Additionally, it
 supports using the XML Path Language (XPath) to find and extract information.
So two packages appear in both the Build-Depends and Depends fields
  1. libxml-sax-perl
  2. libxml-namespacesupport-perl
So in this situation there is a possibility that we forget to add one or both of these packages to the Depends field. I'm not saying we will surely miss them, but we might; after all, we are all human beings. So how can we improve the situation using CDBS? Let me go through, step by step, what we need to do.
  1. Create a file called control.in with the same contents as above, but with slight modifications in the Build-Depends and Depends fields. I will show the diff below to avoid re-pasting the entire file again and again.
--- debian/control      2013-04-28 23:08:11.930082600 +0530
+++ debian/control.in   2013-05-04 20:51:18.849680419 +0530
@@ -5,13 +5,7 @@
  Chris Butler <chrisb@debian.org>
 Section: perl
 Priority: optional
-Build-Depends: perl (>= 5.12),
- debhelper (>= 9.20120312),
- libtest-pod-perl,
- libxml2-dev,
- libxml-namespacesupport-perl,
- libxml-sax-perl,
- zlib1g-dev
+Build-Depends: @cdbs@
 Standards-Version: 3.9.4
 Vcs-Browser: http://anonscm.debian.org/gitweb/?p=pkg-perl/packages/libxml-libxml-perl.git
 Vcs-Git: git://anonscm.debian.org/pkg-perl/packages/libxml-libxml-perl.git
@@ -20,8 +14,7 @@
 Package: libxml-libxml-perl
 Architecture: any
 Depends: ${shlibs:Depends}, ${misc:Depends}, ${perl:Depends},
- libxml-namespacesupport-perl,
- libxml-sax-perl
+ ${cdbs:Depends}
 Breaks: libxml-libxml-common-perl
 Replaces: libxml-libxml-common-perl
 Description: Perl interface to the libxml2 library
@@ -30,4 +23,3 @@
  programmers to make use of the highly capable validating XML parser and the
  high performance Document Object Model (DOM) implementation. Additionally, it
  supports using the XML Path Language (XPath) to find and extract information.
-
  2. Next we need to have something like the following in the rules file
#!/usr/bin/make -f
include /usr/share/cdbs/1/rules/debhelper.mk
include /usr/share/cdbs/1/rules/utils.mk
include /usr/share/cdbs/1/rules/upstream-tarball.mk
include /usr/share/cdbs/1/class/perl-makemaker.mk
pkg = $(DEB_SOURCE_PACKAGE)
deps = libxml-namespacesupport-perl, libxml-sax-perl
deps-test = libtest-pod-perl
CDBS_BUILD_DEPENDS +=, $(deps), $(deps-test)
CDBS_BUILD_DEPENDS +=, zlib1g-dev, libxml2-dev, perl (>= 5.12)
CDBS_DEPENDS_$(pkg) = , $(deps)
So basically we moved all the Build-Depends and Depends entries to the rules file. The common ones are placed in the deps variable and assigned to both Build-Depends and Depends. CDBS uses the following variables for package relationship management.
  1. CDBS_BUILD_DEPENDS: This variable helps you manage the Build-Depends field; all you need to do is put the placeholder @cdbs@ in the Build-Depends field of your control.in.
  2. CDBS_DEPENDS: This variable can be used to manage the Depends field of a binary package; for each binary package you need one CDBS_DEPENDS_pkgname variable with the dependencies assigned to it. In your control.in, append ${cdbs:Depends} to the Depends field.
  3. CDBS_PROVIDES, CDBS_BREAKS, CDBS_RECOMMENDS, CDBS_PREDEPENDS, CDBS_SUGGESTS, CDBS_REPLACES: all of these do the job you think they do :-).
Other than CDBS_BUILD_DEPENDS, all the other variables work using substvars, i.e. CDBS puts the respective substitutions in the pkgname.substvars file, which is used during deb creation to replace things in the control file. So to make CDBS generate the new control file, run the command below
DEB_MAINTAINER_MODE=1 fakeroot debian/rules debian/control
Basically this command needs to be executed before starting the build process; if you miss it, your changes will not be reflected in debian/control. Additionally, this feature is a maintainer-mode helper tool, because Debian policy prohibits changing debian/control during a normal package build. So what is the benefit of using this feature of CDBS? I've listed some benefits which I felt are obvious.
  1. When there is an intersection between Build-Depends and Depends, this feature of CDBS is helpful. As shown above, put all intersecting dependencies in a common variable and assign it wherever it is required. Thus we avoid possibly missing some run-time dependencies due to human error.
  2. It is also possible that a newer version of a package requires a specific version of another package (mostly libraries); we update the build dependencies but might forget to do the same in Depends. By using this feature we can make sure we do not miss such details.
One last thing I want to point out: if you are NMUing a CDBS package,
NMUs need not (but are encouraged to) make special use of these tools. In particular, the debian/control.in file can be completely ignored.
Before closing the post: if you find a mistake in it, please let me know either through comments or through email. Soon I will be back with new CDBS recipes; till then, cya.
